Learning from non-irreducible Markov chains

Authors

Abstract

Most of the existing literature on supervised machine learning problems focuses on the case when the training data set is drawn from an i.i.d. sample. However, many practical problems are characterized by temporal dependence and strong correlation between the marginals of the data-generating process, suggesting that the i.i.d. assumption is not always justified. This problem has already been considered in the context of Markov chains satisfying the Doeblin condition. This condition, among other things, implies that the chain is not singular in its behavior, i.e. that it is irreducible. In this article, we focus on a not necessarily irreducible chain. Assuming the chain is uniformly ergodic with respect to the L1-Wasserstein distance, and under certain regularity assumptions on the hypothesis class and the state space of the chain, we first obtain a uniform convergence result for the corresponding sample error, and then conclude learnability of the approximate sample error minimization algorithm and find generalization bounds. At the end, relative uniform convergence is also discussed.
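To make the setting concrete, here is a minimal, purely illustrative sketch (not the paper's algorithm): a dependent sample is drawn from a small Markov chain, noisy labels are attached, and a sample-error minimizer is picked from a tiny hypothesis class. The chain, the noise level, and the hypotheses are all invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Transition matrix of a simple irreducible 2-state chain (illustrative only;
# the paper's setting covers not necessarily irreducible chains on general
# state spaces).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Draw a dependent sample X_1, ..., X_n from the chain (not i.i.d.).
n = 5000
x = np.empty(n, dtype=int)
x[0] = 0
for t in range(1, n):
    x[t] = rng.choice(2, p=P[x[t - 1]])

# Noisy labels: y equals x except with probability 0.1.
y = np.where(rng.random(n) < 0.1, 1 - x, x)

# A tiny hypothesis class; approximate sample-error minimization simply
# picks the hypothesis with the smallest empirical 0-1 loss.
hypotheses = {
    "zero": lambda z: np.zeros_like(z),
    "one": lambda z: np.ones_like(z),
    "identity": lambda z: z,
    "flip": lambda z: 1 - z,
}
errors = {name: np.mean(h(x) != y) for name, h in hypotheses.items()}
best = min(errors, key=errors.get)
print(best, errors[best])
```

The question the paper addresses is when such a minimizer generalizes despite the correlation between successive sample points.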


Similar resources

The Rate of Rényi Entropy for Irreducible Markov Chains

In this paper, we obtain the Rényi entropy rate for irreducible, aperiodic Markov chains with countable state space, using the theory of countable nonnegative matrices. We also obtain a bound on the rate of Rényi entropy of an irreducible Markov chain. Finally, we show that the bound for the Rényi entropy rate is the Shannon entropy rate.
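For a finite chain (a stand-in for the countable case), the Rényi entropy rate can be computed from the Perron root of the matrix with entries p_ij^α, and it approaches the Shannon entropy rate as α → 1. A small sketch with an invented 2-state chain:

```python
import numpy as np

# Transition matrix of an irreducible, aperiodic 2-state chain
# (finite illustrative example; the paper treats countable state spaces).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def renyi_rate(P, alpha):
    """Rényi entropy rate via the Perron root of the matrix (p_ij ** alpha)."""
    rho = max(np.linalg.eigvals(P ** alpha).real)
    return np.log(rho) / (1.0 - alpha)

def shannon_rate(P):
    """Shannon entropy rate: stationary average of the row entropies (in nats)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = vecs[:, np.argmax(vals.real)].real
    pi /= pi.sum()
    row_entropy = -(P * np.log(P)).sum(axis=1)
    return float(pi @ row_entropy)

# As alpha -> 1 the Rényi rate approaches the Shannon rate.
print(shannon_rate(P))
print(renyi_rate(P, 1.0001))
```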


Model Reduction of Irreducible Markov Chains

We are interested in developing computational tools for reducing the state space of irreducible Markov chains. As a means of decreasing the dimensionality of a given Markov chain, we study the concept of aggregation. The approximation error between the original and the reduced-order model is captured by a metric that penalizes the asymptotic deviation of the outputs of the two systems. For the cas...
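One standard aggregation construction (stationary-weighted lumping of states, not necessarily the metric-based reduction studied in the paper above) builds a reduced transition matrix whose stationary distribution is exactly the aggregated stationary distribution of the original chain. A sketch with an invented 4-state chain:

```python
import numpy as np

# A 4-state irreducible chain, to be aggregated into 2 super-states.
P = np.array([
    [0.5, 0.2, 0.2, 0.1],
    [0.3, 0.4, 0.1, 0.2],
    [0.1, 0.2, 0.4, 0.3],
    [0.2, 0.1, 0.3, 0.4],
])

def stationary(P):
    """Stationary distribution as the Perron left eigenvector of P."""
    vals, vecs = np.linalg.eig(P.T)
    pi = vecs[:, np.argmax(vals.real)].real
    return pi / pi.sum()

def aggregate(P, partition):
    """Stationary-weighted lumping:
    Phat[I, J] = sum_{i in I} (pi_i / pi_I) * sum_{j in J} P[i, j]."""
    pi = stationary(P)
    k = len(partition)
    Phat = np.zeros((k, k))
    for I, block_I in enumerate(partition):
        w = pi[block_I] / pi[block_I].sum()
        for J, block_J in enumerate(partition):
            Phat[I, J] = w @ P[np.ix_(block_I, block_J)].sum(axis=1)
    return Phat

partition = [np.array([0, 1]), np.array([2, 3])]
Phat = aggregate(P, partition)
print(Phat)
```

By construction, the stationary distribution of the reduced chain equals the block sums of the original stationary distribution, so the long-run behavior of the super-states is preserved.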


Learning from uniformly ergodic Markov chains

Evaluation of the generalization performance of learning algorithms has been the main thread of theoretical research in machine learning. The previous bounds describing the generalization performance of the empirical risk minimization (ERM) algorithm are usually established based on independent and identically distributed (i.i.d.) samples. In this paper we go far beyond this classical framework by esta...
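The phenomenon underlying such bounds can be illustrated numerically: along a single path of a uniformly ergodic chain, empirical averages still converge to stationary expectations even though the sample is dependent. A small sketch with an invented 2-state chain:

```python
import numpy as np

rng = np.random.default_rng(1)

# A uniformly ergodic 2-state chain with stationary distribution (0.6, 0.4).
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def sample_path(n):
    u = rng.random(n)
    x = np.empty(n, dtype=int)
    x[0] = 0
    for t in range(1, n):
        # Move to state 1 with the probability given by the current row of P.
        x[t] = int(u[t] < P[x[t - 1], 1])
    return x

# The empirical mean along one dependent path approaches the stationary
# expectation E_pi[X] = 0.4, mirroring the convergence of empirical risks
# to expected risks in the non-i.i.d. setting.
x = sample_path(100_000)
for n in (100, 1_000, 100_000):
    print(n, abs(x[:n].mean() - 0.4))
```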


Any Two Irreducible Markov Chains Are Finitarily Orbit Equivalent

Two invertible dynamical systems (X, A, μ, T) and (Y, B, ν, S), where X, Y are metrizable spaces and T, S are homeomorphisms on X and Y, are said to be finitarily orbit equivalent if there exists an invertible measure-preserving mapping φ from a subset X0 of X of full measure to a subset Y0 of Y of full measure such that φ|X0 is continuous in the relative topology on X0, φ|Y0 is continuous in t...


On Non-reversible Markov Chains

Reversibility is a sufficient but not necessary condition for Markov chains used in Markov chain Monte Carlo simulation. It is necessary to select a Markov chain that has a pre-specified distribution as its unique stationary distribution. There are many Markov chains that have such a property. We give guidelines on how to rank them based on the asymptotic variance of the estimates they produce. T...
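The ranking criterion mentioned here can be illustrated with a batch-means estimate of asymptotic variance (a standard diagnostic, chosen for this example; the paper's guidelines are not reproduced): two invented chains with the same stationary distribution but very different mixing speeds.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_path(P, n):
    u = rng.random(n)
    x = np.empty(n, dtype=int)
    x[0] = 0
    for t in range(1, n):
        x[t] = int(u[t] < P[x[t - 1], 1])
    return x

def batch_means_variance(x, batch=1000):
    """Batch-means estimate of the asymptotic variance of the sample mean."""
    m = len(x) // batch
    means = x[: m * batch].reshape(m, batch).mean(axis=1)
    return batch * means.var(ddof=1)

# Both chains have the uniform stationary distribution (1/2, 1/2), but the
# slowly mixing chain yields sample means with much larger asymptotic variance.
P_fast = np.array([[0.5, 0.5],
                   [0.5, 0.5]])    # independent draws
P_slow = np.array([[0.95, 0.05],
                   [0.05, 0.95]])  # strongly autocorrelated draws

n = 200_000
v_fast = batch_means_variance(sample_path(P_fast, n))
v_slow = batch_means_variance(sample_path(P_slow, n))
print(v_fast, v_slow)
```

Under the ranking by asymptotic variance, the fast-mixing chain is preferable even though both target the same distribution.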



Journal

Journal title: Journal of Mathematical Analysis and Applications

Year: 2023

ISSN: 0022-247X, 1096-0813

DOI: https://doi.org/10.1016/j.jmaa.2023.127049